

Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

Uppal, Ananya, Singh, Shashank, Poczos, Barnabas

Neural Information Processing Systems

We study the problem of estimating a nonparametric probability distribution under a family of losses called Besov IPMs. This family is quite large, including, for example, L^p distances, total variation distance, and generalizations of both Wasserstein (earth mover's) and Kolmogorov-Smirnov distances. For a wide variety of settings, we provide both lower and upper bounds, identifying precisely how the choice of loss function and assumptions on the data distribution interact to determine the minimax optimal convergence rate. We also show that, in many cases, linear distribution estimates, such as the empirical distribution or kernel density estimator, cannot converge at the optimal rate. These bounds generalize, unify, or improve on several recent and classical results. Moreover, IPMs can be used to formalize a statistical model of generative adversarial networks (GANs). We therefore show how our results imply bounds on the statistical error of a GAN, demonstrating, for example, that in many cases GANs can strictly outperform the best linear estimator.
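For context, an integral probability metric (IPM) indexed by a discriminator class F measures the largest discrepancy in expectations over that class; a minimal sketch of the standard definition (not the paper's exact notation) is:

\[
d_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \Big| \mathbb{E}_{X \sim P}[f(X)] \;-\; \mathbb{E}_{X \sim Q}[f(X)] \Big|.
\]

Taking F to be the 1-Lipschitz functions recovers the Wasserstein (earth mover's) distance, taking functions bounded by 1 recovers total variation (up to a constant factor), and taking indicators of half-lines recovers the Kolmogorov-Smirnov distance; the Besov IPMs studied in the paper take F to be a ball in a Besov space B^sigma_{p,q}, a scale of function classes that interpolates among such examples.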


Reviews: Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

Neural Information Processing Systems

The paper's structure and organization could be substantially improved. The notation is unclear, and the terminology is not defined; see, for example, lines 45-48. The formal problem statement (Section 2.2) is also vague. Many technical terms are used without any context; they are not explained, and it is not clear how those concepts support the claims made in the paper.


Reviews: Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses

Neural Information Processing Systems

The reviewers agree that this will make a good contribution to NeurIPS. Please read the reviewer suggestions and try to incorporate them into the final submission.


